Symbol detection is a fundamental and challenging problem in modern communication systems, e.g., in multiuser multiple-input multiple-output (MIMO) settings. Iterative soft interference cancellation (SIC) is a state-of-the-art method for this task, and recently motivated data-driven neural network models, e.g., DeepSIC, can handle unknown nonlinear channels. However, these neural network models require thorough, time-consuming training of the networks before being applied, and thus are not readily suited to highly dynamic channels in practice. We introduce an online training framework that can swiftly adapt to any changes in the channel. Our proposed framework unifies recent deep unfolding approaches with the emerging generative adversarial networks (GANs) to capture any changes in the channel and quickly adjust the networks to maintain the model's top performance. We demonstrate that our framework significantly outperforms recent neural network models on highly dynamic channels and even surpasses those models on static channels in our experiments.
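As a rough illustration of the adversarial online-adaptation idea described in this abstract, the sketch below pairs a hypothetical detector with a discriminator that tries to tell detector outputs apart from known pilot symbols; the detector is updated to fool the discriminator as the channel drifts. The module names, layer sizes, and the use of pilots as "real" samples are assumptions for illustration, not the framework's actual architecture.

```python
# Illustrative sketch only (PyTorch assumed): adversarial online adaptation
# of a symbol detector. Shapes, layers, and the pilot-based "real" samples
# are hypothetical choices, not the framework's actual design.
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Maps received signals y to soft symbol estimates."""
    def __init__(self, n_rx, n_sym):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_rx, 64), nn.ReLU(),
                                 nn.Linear(64, n_sym))

    def forward(self, y):
        return torch.tanh(self.net(y))

class Discriminator(nn.Module):
    """Outputs a logit: does this symbol vector look like a valid transmission?"""
    def __init__(self, n_sym):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_sym, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, s):
        return self.net(s)

def online_step(det, disc, y_batch, pilot_syms, opt_det, opt_disc):
    bce = nn.BCEWithLogitsLoss()
    # 1) discriminator step: pilots are "real", current detector outputs are "fake"
    fake = det(y_batch).detach()
    loss_disc = (bce(disc(pilot_syms), torch.ones(len(pilot_syms), 1)) +
                 bce(disc(fake), torch.zeros(len(fake), 1)))
    opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()
    # 2) detector step: produce outputs the discriminator accepts as valid symbols
    loss_det = bce(disc(det(y_batch)), torch.ones(len(y_batch), 1))
    opt_det.zero_grad(); loss_det.backward(); opt_det.step()
    return loss_det.item(), loss_disc.item()

det, disc = Detector(n_rx=8, n_sym=4), Discriminator(n_sym=4)
online_step(det, disc,
            y_batch=torch.randn(32, 8),            # received signals
            pilot_syms=torch.randn(32, 4).sign(),  # known +/-1 pilot symbols
            opt_det=torch.optim.Adam(det.parameters(), 1e-3),
            opt_disc=torch.optim.Adam(disc.parameters(), 1e-3))
```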
Self-supervised learning (SSL) leverages the underlying data structure to generate supervisory signals for training deep networks. This approach offers a practical solution for learning from multiplexed immunofluorescence brain images, where data are typically more abundant than human expert annotations. SSL algorithms based on contrastive learning and image reconstruction have shown impressive performance. Unfortunately, these methods were designed and validated on natural images rather than biomedical images. A few recent works have applied SSL to the analysis of cell images. However, none of these works studies SSL on multiplexed immunofluorescence brain images, nor do they provide a clear theoretical justification for adopting a specific SSL method. Motivated by these limitations, our paper introduces a self-supervised Dual-loss Adaptive Masked Autoencoder (DAMA) algorithm developed from an information-theoretic perspective. DAMA's objective function maximizes mutual information by minimizing the conditional entropy in pixel-level reconstruction and feature-level regression. In addition, DAMA introduces a novel adaptive mask sampling strategy to maximize mutual information and effectively learn contextual information from brain-cell data. For the first time, we provide an extensive comparison of SSL algorithms on multiplexed immunofluorescence brain images. Our results show that DAMA outperforms other SSL methods on cell classification and segmentation tasks. DAMA also achieves competitive accuracy on ImageNet-1k. The source code of DAMA is publicly available at https://github.com/hula-ai/dama
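A minimal sketch of what a dual-loss objective of this kind can look like, combining a pixel-level reconstruction term over masked patches with a feature-level regression term toward a target encoder. The function signature, the masking convention, and the weighting factor are illustrative assumptions; the actual DAMA objective and adaptive mask sampling are in the linked repository.

```python
# Illustrative sketch only: a dual-loss masked-autoencoder objective with a
# pixel-level reconstruction term and a feature-level regression term.
# Shapes, the mask convention, and the weight alpha are assumptions.
import torch
import torch.nn.functional as F

def dual_masked_loss(pred_pixels, target_pixels, pred_feats, target_feats,
                     mask, alpha=1.0):
    """pred/target_pixels: (B, P, D) per-patch pixels; mask: (B, P), 1 = masked."""
    # pixel-level reconstruction, averaged over masked patches only
    per_patch = ((pred_pixels - target_pixels) ** 2).mean(dim=-1)
    pix_loss = (per_patch * mask).sum() / mask.sum().clamp(min=1)
    # feature-level regression toward a (frozen) target representation
    feat_loss = F.mse_loss(pred_feats, target_feats.detach())
    return pix_loss + alpha * feat_loss

loss = dual_masked_loss(torch.randn(2, 196, 768), torch.randn(2, 196, 768),
                        torch.randn(2, 256), torch.randn(2, 256),
                        (torch.rand(2, 196) > 0.25).float())
```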
Error-correcting codes are a fundamental component of modern communication systems, which demand extremely high throughput, ultra-reliability, and low latency. Recent approaches that use machine learning (ML) models as decoders offer improved performance and strong adaptability to unknown environments, where traditional decoders struggle. We introduce a general framework to further boost the performance and applicability of ML decoders. We propose to combine ML decoders with a competing discriminator network that tries to distinguish codewords from noisy words and, as a result, guides the decoding model toward recovering the transmitted codewords. Our framework is game-theoretic and motivated by generative adversarial networks (GANs), with the decoder and the discriminator competing in a zero-sum game. The decoder learns to simultaneously decode and generate codewords, while the discriminator learns to tell apart decoded outputs from codewords. Thus, the decoder learns to map noisy received signals onto codewords, increasing the probability of successful decoding. We establish a strong connection between our framework and the optimal maximum-likelihood decoder by showing that this decoder defines a Nash equilibrium point of our game. Hence, training to equilibrium has a good chance of achieving the optimal maximum-likelihood performance. Moreover, our framework does not require training labels, which are typically unavailable during communications, and thus it can be trained online and adapt to channel dynamics. To demonstrate the performance of our framework, we combine it with recent neural decoders and show improved performance compared to the original models and to traditional decoding algorithms on various codes.
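The sketch below illustrates, in a much-simplified form, how such a label-free adversarial setup can be trained: valid codewords (obtained by encoding random messages with a known generator matrix) serve as "real" samples, while decoder outputs on noisy observations serve as "fake" samples. The toy (7,4) code, network sizes, and plain cross-entropy game are assumptions for illustration and omit the fidelity terms a practical decoder would also need.

```python
# Illustrative sketch only: label-free adversarial training of a toy decoder.
# The (7,4) generator matrix, noise model, and architectures are assumptions;
# a practical setup would also tie the decoder output to the received word.
import torch
import torch.nn as nn

k, n = 4, 7
G = torch.tensor([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=torch.float32)

decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n), nn.Sigmoid())
critic = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
opt_dec = torch.optim.Adam(decoder.parameters(), 1e-3)
opt_cri = torch.optim.Adam(critic.parameters(), 1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(10):  # a few illustrative steps
    msgs = torch.randint(0, 2, (32, k)).float()
    codewords = msgs @ G % 2                                # "real" samples: no labels needed
    noisy = codewords + 0.5 * torch.randn_like(codewords)   # channel observations
    # critic step: distinguish codewords from decoded outputs
    fake = decoder(noisy).detach()
    loss_c = (bce(critic(codewords), torch.ones(32, 1)) +
              bce(critic(fake), torch.zeros(32, 1)))
    opt_cri.zero_grad(); loss_c.backward(); opt_cri.step()
    # decoder step: map noisy words to something the critic accepts as a codeword
    loss_d = bce(critic(decoder(noisy)), torch.ones(32, 1))
    opt_dec.zero_grad(); loss_d.backward(); opt_dec.step()
```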
Thanks to its flexible, secure, and performant characteristics, edge computing has revolutionized the world of mobile and wireless networks. Recently, we have witnessed its increasing adoption, prompting greater efforts to deploy machine learning (ML) techniques such as federated learning (FL) at the edge. Compared to traditional distributed machine learning (ML), FL was heralded as improving communication efficiency. The original FL assumes a central aggregation server for aggregating locally optimized parameters, which may bring reliability and latency issues. In this paper, we conduct an in-depth study of strategies for replacing this central server with a flying master that is dynamically selected based on the current participants and/or available resources. Specifically, we compare different metrics for selecting this flying master and evaluate consensus algorithms for performing the selection. Our results demonstrate a significant reduction in runtime when using our flying-master FL framework, based on measurements from our EdgeAI testbed and over a real 5G network using an operational edge testbed.
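A toy sketch of how a "flying master" could be elected each round from simple per-node metrics, after which the elected node aggregates that round's updates. The metric names, weights, election rule, and naive averaging are illustrative assumptions, not the metrics or consensus algorithms evaluated in the paper.

```python
# Illustrative sketch only: electing a per-round "flying master" from simple
# node metrics and letting it perform a naive FedAvg-style aggregation.
# Metric names, weights, and the election rule are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    compute: float    # normalized available compute in [0, 1]
    bandwidth: float  # normalized link bandwidth in [0, 1]
    battery: float    # normalized remaining energy in [0, 1]

def elect_flying_master(nodes, weights=(0.4, 0.4, 0.2)):
    score = lambda m: (weights[0] * m.compute +
                       weights[1] * m.bandwidth +
                       weights[2] * m.battery)
    return max(nodes, key=score)

def federated_round(nodes, local_updates):
    master = elect_flying_master(nodes)
    # the elected master aggregates this round's locally optimized parameters
    aggregated = [sum(w) / len(w) for w in zip(*local_updates)]
    return master.node_id, aggregated

nodes = [Node("edge-a", 0.9, 0.5, 0.7), Node("edge-b", 0.6, 0.9, 0.8)]
print(federated_round(nodes, [[0.10, 0.20], [0.30, 0.40]]))
```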
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game playing. Surprisingly, these varied, complex systems with massive numbers of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions of the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs in deep linear networks for the popular mean squared error (MSE) and cross entropy (CE) losses. Furthermore, we extend our analysis to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse under this setting.
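For context, the collapse geometry referred to above is usually summarized by the following four properties (standard notation from the Neural Collapse literature, not this paper's specific formulation), where $h_{k,i}$ is the last-layer feature of sample $i$ in class $k$, $\mu_k$ and $\mu_G$ are the class and global feature means, and $w_k$ is the $k$-th classifier row:

```latex
% Standard statement of the four Neural Collapse properties (illustrative
% notation; see Papyan et al. for the original formulation).
\begin{align*}
\textbf{(NC1)}\; & \Sigma_W := \tfrac{1}{N}\sum_{k,i}(h_{k,i}-\mu_k)(h_{k,i}-\mu_k)^{\top} \to 0
  && \text{(within-class variability collapses)}\\
\textbf{(NC2)}\; & \frac{\langle \mu_k-\mu_G,\;\mu_{k'}-\mu_G\rangle}{\|\mu_k-\mu_G\|\,\|\mu_{k'}-\mu_G\|}
  \to \frac{K}{K-1}\,\delta_{kk'}-\frac{1}{K-1}
  && \text{(class means form a simplex ETF)}\\
\textbf{(NC3)}\; & \frac{w_k}{\|w_k\|} \to \frac{\mu_k-\mu_G}{\|\mu_k-\mu_G\|}
  && \text{(classifier aligns with class means)}\\
\textbf{(NC4)}\; & \arg\max_{k'}\langle w_{k'},h\rangle \to \arg\min_{k'}\|h-\mu_{k'}\|
  && \text{(prediction reduces to nearest class mean)}
\end{align*}
```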
Diabetic Retinopathy (DR) is a leading cause of vision loss worldwide, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations using the Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy on both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net into a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach remains robust under a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
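As a rough illustration of the attention strategy mentioned above (selecting lesion information from both low- and high-level features), the sketch below gates an upsampled low-level map against a high-level map. The channel sizes, gating form, and fusion point are hypothetical and not DRG-Net's actual layers.

```python
# Illustrative sketch only: attention-gated fusion of low- and high-level
# feature maps. Channel counts and the gating form are assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, c_low, c_high):
        super().__init__()
        self.proj = nn.Conv2d(c_low, c_high, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(2 * c_high, c_high, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, low, high):
        low = self.proj(low)
        low = nn.functional.interpolate(low, size=high.shape[-2:],
                                        mode="bilinear", align_corners=False)
        a = self.gate(torch.cat([low, high], dim=1))  # per-location attention
        return a * low + (1 - a) * high               # weighted feature fusion

fused = AttentionFusion(64, 256)(torch.randn(1, 64, 56, 56),
                                 torch.randn(1, 256, 14, 14))
```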
We introduce an approach to the answer-aware question generation problem. Instead of relying only on the capability of strong pre-trained language models, we observe that the information about answers and questions can be found in a few relevant sentences in the context. Based on this observation, we design a model with two modules: a selector and a generator. The selector forces the model to focus more on the sentences relevant to the answer, providing implicit local information. The generator produces questions by implicitly combining local information from the selector with global information from the whole context encoded by the encoder. The model is trained jointly to take advantage of latent interactions between the two modules. Experimental results on two benchmark datasets show that our model outperforms strong pre-trained models on the question generation task. The code is also available (shorturl.at/lV567).
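A compact sketch of the selector + generator interplay described above: sentence-level relevance scores weight the local context, which is fused with the global context to condition generation. The GRU decoder, fusion rule, and all shapes are illustrative stand-ins; the paper builds on strong pre-trained language models instead.

```python
# Illustrative sketch only: a selector that scores context sentences and a
# generator conditioned on fused local + global representations.
import torch
import torch.nn as nn

class SelectorGenerator(nn.Module):
    def __init__(self, d_model, vocab):
        super().__init__()
        self.selector = nn.Linear(d_model, 1)                  # sentence relevance logit
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, sent_reprs, context_repr, question_emb):
        # sent_reprs: (B, S, d) sentence encodings; context_repr: (B, d) global context
        rel = torch.sigmoid(self.selector(sent_reprs))          # (B, S, 1) relevance
        local = (rel * sent_reprs).sum(1) / rel.sum(1).clamp(min=1e-6)
        init = (local + context_repr).unsqueeze(0)              # fuse local + global
        dec_out, _ = self.decoder(question_emb, init)
        return self.out(dec_out), rel.squeeze(-1)

model = SelectorGenerator(d_model=32, vocab=100)
logits, rel = model(torch.randn(2, 5, 32), torch.randn(2, 32), torch.randn(2, 7, 32))
```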
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
This paper aims to improve the Warping Planar Object Detection Network (WPOD-Net) using feature engineering to increase accuracy. Which problems does feature engineering solve for this network? More specifically, we argue that adding knowledge about edges in the image enhances the information available for determining the license plate contour in the original WPOD-Net model. The Sobel filter was selected experimentally and acts as a convolutional neural network layer; its edge information is combined with the features of the original network to create the final embedding vector. The proposed model was compared with the original model on a dataset that we collected for evaluation. The results are evaluated through the Quadrilateral Intersection over Union value and demonstrate that the model achieves a significant improvement in performance.
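The edge-feature idea can be pictured with a fixed Sobel layer implemented as a convolution whose kernels are the standard Sobel operators; its edge map is then concatenated with the network's features. The fusion point and tensor shapes below are illustrative assumptions rather than WPOD-Net's exact wiring.

```python
# Illustrative sketch only: a fixed (non-trainable) Sobel layer producing an
# edge-magnitude map that is concatenated with backbone features.
import torch
import torch.nn as nn

class SobelLayer(nn.Module):
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()                                      # standard Sobel kernels
        kernel = torch.stack([gx, gy]).unsqueeze(1)      # (2, 1, 3, 3)
        self.conv = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
        self.conv.weight.data.copy_(kernel)
        self.conv.weight.requires_grad_(False)           # keep the filter fixed

    def forward(self, gray):                             # gray: (B, 1, H, W)
        g = self.conv(gray)
        return torch.sqrt((g ** 2).sum(1, keepdim=True) + 1e-6)  # edge magnitude

# concatenate the edge channel with backbone features of matching resolution
edges = SobelLayer()(torch.randn(1, 1, 208, 208))
backbone_feats = torch.randn(1, 128, 208, 208)           # placeholder features
fused = torch.cat([backbone_feats, edges], dim=1)         # input to the embedding head
```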
Recent development in the field of explainable artificial intelligence (XAI) has helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens a door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) could expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective backdoor triggers against malware classifiers. To address this trade-off, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and from that we establish a defense, called XRand, against such attacks. We show that our mechanism restricts the information that the adversary can learn about the top important features, while maintaining the faithfulness of the explanations.
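A toy sketch of the flavor of such a defense, using plain randomized response over the reported top-k features: with a probability governed by the privacy budget, the true feature is kept; otherwise a random feature is substituted. This is only a generic LDP-style illustration and not XRand's actual mechanism.

```python
# Illustrative sketch only: randomized response over the top-k features of an
# explanation. The mechanism, p_keep formula, and interface are assumptions.
import math
import random

def ldp_top_k(importances, k, eps):
    """importances: dict feature -> score; returns a privatized top-k list."""
    features = list(importances)
    true_topk = sorted(features, key=importances.get, reverse=True)[:k]
    p_keep = math.exp(eps) / (math.exp(eps) + 1)   # randomized-response bias
    reported = []
    for f in true_topk:
        if random.random() < p_keep:
            reported.append(f)                     # keep the true feature
        else:
            reported.append(random.choice(features))  # swap in a random feature
    return reported

print(ldp_top_k({"f1": 0.9, "f2": 0.5, "f3": 0.2, "f4": 0.1}, k=2, eps=1.0))
```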